Results 1 - 20 of 37
1.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15979-15995, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37610914

ABSTRACT

Great progress has been made on single image deraining based on deep convolutional neural networks (CNNs). In most existing deep deraining methods, CNNs aim to learn a direct mapping from rainy images to clean rain-free images, and their architectures are becoming more and more complex. However, because rain mixes with object edges and the background, it is difficult to separate rain from objects and background, and the edge details of the image cannot be effectively recovered during reconstruction. To address this problem, we propose a novel wavelet approximation-aware residual network (WAAR), wherein rain is effectively removed from both low-frequency structures and high-frequency details at each level separately, especially from the low-frequency sub-images at each level. After the wavelet transform, novel approximation aware (AAM) and approximation level blending (ALB) mechanisms further help the low-frequency networks at each level recover the structure and texture of low-frequency sub-images recursively, while the high-frequency network can effectively eliminate rain streaks through block connection and achieve different degrees of edge-detail enhancement by adjusting hyperparameters. In addition, we introduce block connection to enrich the high-frequency details in the high-frequency network, which is favorable for obtaining potential interdependencies between high- and low-frequency features. Experimental results indicate that the proposed WAAR exhibits strong performance in reconstructing clean and rain-free images, recovering real and undistorted texture structures, and enhancing image edges, in comparison with state-of-the-art approaches on synthetic and real image datasets. These results confirm the effectiveness of our method, especially on image edges and texture details.
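As a rough illustration of the decomposition WAAR builds on (not the paper's network itself), a single level of the 2-D discrete wavelet transform splits an image into one low-frequency approximation and three high-frequency detail sub-bands, which separate branches can then process before the inverse transform fuses them back. The sketch below uses PyWavelets with a Haar wavelet and a random array standing in for a rainy image.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)              # stand-in for a grayscale rainy image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')   # approximation + (horizontal, vertical, diagonal) details

# A deraining network in this style would refine cA with a low-frequency branch
# and cH/cV/cD with a high-frequency branch before recombining them.
recon = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
print(np.allclose(recon, img))              # the unmodified transform is lossless -> True
```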

2.
Article in English | MEDLINE | ID: mdl-37440374

ABSTRACT

Many single image super-resolution (SISR) methods that use convolutional neural networks (CNNs) learn the relationship between low- and high-resolution images directly, without considering the context structure and detail fidelity. This can limit the potential of CNNs and result in unrealistic, distorted edges and textures in the reconstructed images. A more effective approach is to incorporate prior knowledge about the image into the model to aid in image reconstruction. In this study, we propose a novel recurrent structure-preserving mechanism that uses the multiscale wavelet transform (WT) as an image prior, namely the wavelet pyramid recurrent structure-preserving attention network (WRSANet), to process the low- and high-frequency subnetworks at each level separately and recursively. We propose a novel structure scale preservation (SSP) architecture that differs from traditional WTs and allows us to incorporate and learn structure-preservation subnetworks at each level. By using the proposed structure scale fusion (SSF) combined with the inverse WT, we can recursively restore and preserve rich low-frequency image structure by combining SSP at various levels. Furthermore, we propose novel low-to-high-frequency information transmission (L2HIT) and detail enhancement (DE) mechanisms to address detail distortion in high-frequency images by transferring information from the structure-preservation subnetworks. This preserves the low-frequency structure while reconstructing high-frequency details, improving detail fidelity and avoiding structural distortion. Finally, a joint loss function balances the fusion of low- and high-frequency information at different degrees, with hyperparameters adjusted during training. The experimental results demonstrate that the proposed WRSANet achieves better performance and visual quality than the state-of-the-art (SOTA) on synthetic and real datasets, especially in terms of context structure and texture details.
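For context only, the kind of multiscale wavelet pyramid described here as an image prior can be produced with PyWavelets' `wavedec2`/`waverec2`; the level count and wavelet below are illustrative choices, not those of the paper.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)                    # stand-in for a low-resolution input
coeffs = pywt.wavedec2(img, 'haar', level=3)      # [cA3, (H3, V3, D3), (H2, V2, D2), (H1, V1, D1)]

cA3 = coeffs[0]                                   # coarsest low-frequency structure
for i, (cH, cV, cD) in enumerate(coeffs[1:]):
    print(f"detail level {3 - i}: {cH.shape}")    # coarse-to-fine high-frequency sub-bands

# A structure-preserving SR model would refine cA3 and each detail level with
# separate sub-networks, then fuse them level by level with the inverse WT.
recon = pywt.waverec2(coeffs, 'haar')
print(np.allclose(recon, img))
```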

3.
Article in English | MEDLINE | ID: mdl-37028310

ABSTRACT

In brain-computer interface (BCI) research, correctly identifying various features and their corresponding actions from complex electroencephalography (EEG) signals is a challenging problem. Most current methods do not jointly consider EEG feature information in the spatial, temporal, and spectral domains, and their model structures cannot effectively extract discriminative features, resulting in limited classification performance. To address this issue, we propose a novel motor-imagery (MI) EEG discrimination method, namely the wavelet-based temporal-spectral-attention correlation coefficient (WTS-CC), which simultaneously considers features and their weighting in the spatial (EEG-channel), temporal, and spectral domains. The initial Temporal Feature Extraction (iTFE) module extracts the initial important temporal features of MI EEG signals. The Deep EEG-Channel-attention (DEC) module then automatically adjusts the weight of each EEG channel according to its importance, thereby effectively enhancing more important EEG channels and suppressing less important ones. Next, the Wavelet-based Temporal-Spectral-attention (WTS) module obtains more discriminative features between different MI tasks by weighting features on two-dimensional time-frequency maps. Finally, a simple discrimination module is used for MI EEG discrimination. The experimental results indicate that the proposed WTS-CC method achieves promising discrimination performance that outperforms state-of-the-art methods in terms of classification accuracy, Kappa coefficient, F1 score, and AUC on three public datasets.
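The idea of re-weighting EEG channels by learned importance resembles squeeze-and-excitation attention; the PyTorch sketch below is only an analogue under that assumption, with made-up dimensions (22 channels, 1000 samples), not the authors' DEC module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over EEG channels (an analogue of
    the channel-weighting idea, not the paper's exact DEC module)."""
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, channels, time)
        s = x.pow(2).mean(dim=-1)      # per-channel signal power as the "squeeze"
        w = self.fc(s)                 # learned per-channel importance in (0, 1)
        return x * w.unsqueeze(-1)     # enhance important channels, suppress others

x = torch.randn(8, 22, 1000)            # e.g. a batch of 22-channel MI-EEG trials
weighted = ChannelAttention(22)(x)
```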


Subjects
Brain-Computer Interfaces, Imagination, Humans, Brain, Electroencephalography/methods, Algorithms
4.
Opt Express ; 31(3): 3606-3618, 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36785349

ABSTRACT

The problem of image dehazing has received a great deal of attention in the computer vision community over the past two decades. Under haze conditions, the scattering of water vapor and dust particles in the air severely reduces image sharpness, making it difficult for many computer vision systems, such as those for object detection, object recognition, surveillance, and driver assistance, to perform further processing. However, previous dehazing methods often suffer from shortcomings such as poor brightness, color cast, incomplete haze removal, halos, artifacts, and blurring. To address these problems, we propose a novel Structure-transferring Edge-enhanced Grid Dehazing Network (SEGDNet) in this study. An edge-preserving smoothing operator, the guided filter, is used to efficiently decompose the images into a low-frequency image structure and high-frequency edges. The Low-frequency Grid Dehazing Subnetwork (LGDSn) is proposed to effectively preserve the low-frequency structure while dehazing. The High-frequency Edge Enhancement Subnetwork (HEESn) is also proposed to enhance the edges and details while removing noise. The Low-and-High frequency Fusion Subnetwork (L&HFSn) fuses the low- and high-frequency results to obtain the final dehazed image. The experimental results on synthetic and real-world datasets demonstrate that our method outperforms the state-of-the-art methods in both qualitative and quantitative evaluations.
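The guided filter used here for structure/edge decomposition is a standard edge-preserving smoother; a compact self-guided implementation (He et al.'s box-filter formulation) is sketched below, with the file name, radius, and epsilon being illustrative values rather than the paper's settings.

```python
import cv2
import numpy as np

def guided_filter(I, p, radius=8, eps=1e-3):
    """Self-guided filter (guide I == input p) giving edge-preserving smoothing."""
    I = I.astype(np.float32); p = p.astype(np.float32)
    k = (2 * radius + 1, 2 * radius + 1)
    mean_I, mean_p = cv2.boxFilter(I, -1, k), cv2.boxFilter(p, -1, k)
    var_I = cv2.boxFilter(I * I, -1, k) - mean_I * mean_I
    cov_Ip = cv2.boxFilter(I * p, -1, k) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return cv2.boxFilter(a, -1, k) * I + cv2.boxFilter(b, -1, k)

gray = cv2.imread("hazy.png", cv2.IMREAD_GRAYSCALE)   # hypothetical hazy image
low = guided_filter(gray, gray)                       # low-frequency structure layer
high = gray.astype(np.float32) - low                  # high-frequency edges and details
```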

5.
Opt Express ; 30(23): 41279-41295, 2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36366610

ABSTRACT

Pedestrian detection is an important research area and technology for car driving, gait recognition, and other applications. Although many pedestrian detection techniques have been introduced, low-resolution imaging devices still exist in real life, so detection in low-resolution images remains a challenging problem. To address this issue, we propose a novel end-to-end Translation-invariant Wavelet Residual Dense Super-Resolution (TiWRD-SR) method that upscales low-resolution (LR) images to super-resolved (SR) images and then uses YOLOv4 for detection, addressing the poor detection performance on low-resolution images. To make the enlarged SR image not only effectively distinguish foreground from background but also highlight the characteristic structure of pedestrians, we decompose the image into low-frequency and high-frequency parts with the stationary wavelet transform (SWT). The high- and low-frequency sub-images are trained through different network structures so that the network can reconstruct the high-frequency edge information and the low-frequency image structure in a more detailed manner. In addition, a high-to-low branch information transmission (H2LBIT) mechanism is proposed to import high-frequency edge information into the low-frequency network so that the reconstructed low-frequency structure becomes more detailed. We also propose a novel loss function that exploits the characteristics of the wavelet decomposition to make the SR network focus on reconstructing image structure, thereby improving detection performance. The experimental results indicate that the proposed TiWRD-SR can effectively improve detection performance.
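The translation-invariant decomposition referred to here is the stationary (undecimated) wavelet transform, which keeps every sub-band at full resolution; the PyWavelets sketch below shows that property with illustrative sizes and wavelet, not the paper's training pipeline.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a low-resolution frame
coeffs = pywt.swt2(img, 'haar', level=2)       # list of (cA, (cH, cV, cD)) per level

for cA, (cH, cV, cD) in coeffs:
    print(cA.shape, cH.shape)                  # every sub-band stays (256, 256): no decimation

# The full-resolution low- and high-frequency sub-images would be fed to the
# structure and edge branches of the SR network; the inverse SWT fuses them back.
recon = pywt.iswt2(coeffs, 'haar')
print(np.allclose(recon, img))
```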

6.
Opt Express ; 30(17): 31029-31043, 2022 Aug 15.
Article in English | MEDLINE | ID: mdl-36242195

ABSTRACT

Removing unwanted reflections from images taken through glass has been widely investigated with deep learning. However, existing methods tend to handle reflections only in specific situations and validate their results on their own datasets, e.g., a few local regions with strong reflections. These limitations mean that many real-world reflections cannot be effectively eliminated. In this study, a novel Translation-invariant Context-retentive Wavelet Reflection Removal Network is proposed to address this issue. In addition to context and background, the low-frequency sub-images still contain a small amount of reflection. To enable background context retention and reflection removal, the low-frequency sub-images at each level are processed by the Context Retention Subnetwork (CRSn) after the wavelet transform. Novel context level blending and the inverse wavelet transform are proposed to remove reflections in the low frequencies and retain background context recursively, which is of great help in restoring clean images. High-frequency sub-images with reflections are processed by the Detail-enhanced Reflection-layer removal Subnetwork to complete reflection removal. In addition, to further separate the reflection and transmission layers, we also propose Detail-enhanced Reflection Information Transmission, through which the reflection-layer features extracted from the high-frequency images help the CRSn effectively separate the transmission layer from the reflection layer and thus remove reflections. The quantitative and visual experimental results on benchmark datasets demonstrate that the proposed method performs better than the state-of-the-art approaches.

7.
Healthcare (Basel) ; 10(7)2022 Jul 10.
Article in English | MEDLINE | ID: mdl-35885808

ABSTRACT

OBJECTIVE: Most neurological diseases are usually accompanied by changes in the oculomotor nerve. Analysis of different types of eye movements will help provide important information in ophthalmology, neurology, and psychology. At present, many scholars use optokinetic nystagmus (OKN) to study the physiological phenomenon of eye movement. OKN is an involuntary eye movement induced by a large moving surrounding visual field. It consists of a slow pursuing eye movement, called "slow phase" (SP), and a fast re-fixating saccade eye movement, called "fast phase" (FP). Non-invasive video-oculography has been used increasingly in eye movement research. However, research-grade eye trackers are often expensive and less accessible to most researchers. Using a low-cost eye tracker to quantitatively measure OKN eye movement will facilitate the general application of eye movement research. METHODS & RESULTS: We design an analytical algorithm to quantitatively measure OKN eye movements on a low-cost eye tracker. Using simple conditional filtering, accurate FP positions can be obtained quickly. The high-precision FP recognition rate is of great help for the subsequent calculation of eye movement analysis parameters, such as mean slow phase velocity (MSPV), which is beneficial as a reference index for patients with strabismus and other eye diseases. CONCLUSIONS: Experimental results indicate that the proposed method achieves faster and better results than other approaches, and can provide an effective algorithm to calculate and analyze the FP position of OKN waveforms.
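A minimal velocity-threshold detector conveys the flavor of fast-phase extraction and MSPV computation, though the paper's conditional filtering is not reproduced here; the threshold and minimum duration below are illustrative values, not the study's.

```python
import numpy as np

def detect_fast_phases(pos, fs, vel_thresh=30.0, min_dur=0.01):
    """Return (start, end) sample indices of saccade-like fast phases.
    pos: horizontal eye position in degrees; fs: sampling rate in Hz."""
    vel = np.gradient(pos) * fs                       # deg/s
    idx = np.flatnonzero(np.abs(vel) > vel_thresh)    # candidate fast-phase samples
    if idx.size == 0:
        return []
    runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    return [(r[0], r[-1]) for r in runs if r.size >= int(min_dur * fs)]

def mean_slow_phase_velocity(pos, fs, fp_segments):
    """MSPV: mean absolute velocity over everything outside the fast phases."""
    vel = np.gradient(pos) * fs
    mask = np.ones(len(pos), dtype=bool)
    for s, e in fp_segments:
        mask[s:e + 1] = False
    return np.abs(vel[mask]).mean()
```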

8.
Healthcare (Basel) ; 9(8)2021 Aug 16.
Article in English | MEDLINE | ID: mdl-34442190

ABSTRACT

Neonatal jaundice is caused by high levels of bilirubin in the body and most commonly appears within three days of birth. Neonatal jaundice detection systems can take pictures in different places and upload them to the system for judgment. However, these detection systems often encounter a white balance problem: color-shifted images induced by different lighting conditions cause the system to misjudge the images. The true color of the image is essential information when the detection system estimates the jaundice value. At present, most systems adopt specific assumptions and rely on color charts to adjust images. In this study, we propose a novel white balance method with a dynamic threshold that iteratively screens appropriate feature factors at different color temperatures so that the adjusted results of different images converge. The experimental results indicate that the proposed method achieves superior results in comparison with several traditional approaches.
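The dynamic-threshold idea of iteratively selecting near-neutral pixels and deriving per-channel gains from them can be sketched as below; the threshold, iteration count, and stopping rule are assumptions for illustration, not the paper's values.

```python
import numpy as np

def iterative_white_balance(img, thresh=0.15, iters=3):
    """img: HxWx3 uint8 RGB. Pixels whose chromaticity is close to neutral gray
    (deviation below `thresh`) drive the gray-world gains on each pass."""
    out = img.astype(np.float64)
    for _ in range(iters):
        brightness = out.mean(axis=2, keepdims=True) + 1e-6
        dev = np.abs(out / brightness - 1.0).max(axis=2)   # per-pixel color cast
        neutral = dev < thresh
        if neutral.sum() < 100:                            # too few candidate pixels: stop
            break
        means = out[neutral].mean(axis=0)                  # channel means of near-gray pixels
        out = np.clip(out * (means.mean() / means), 0, 255)
    return out.astype(np.uint8)
```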

9.
IEEE Trans Image Process ; 30: 934-947, 2021.
Article in English | MEDLINE | ID: mdl-33242306

ABSTRACT

Current deep learning methods seldom consider the effects of small pedestrian ratios and considerable differences in the aspect ratio of input images, which results in low pedestrian detection performance. This study proposes the ratio-and-scale-aware YOLO (RSA-YOLO) method to solve the aforementioned problems. The following procedure is adopted in this method. First, ratio-aware mechanisms are introduced to dynamically adjust the input layer length and width hyperparameters of YOLOv3, thereby solving the problem of considerable differences in the aspect ratio. Second, intelligent splits are used to automatically and appropriately divide the original images into two local images. Ratio-aware YOLO (RA-YOLO) is iteratively performed on the two local images. Because the original and local images produce low- and high-resolution pedestrian detection information after RA-YOLO, respectively, this study proposes new scale-aware mechanisms in which multiresolution fusion is used to solve the problem of misdetection of remarkably small pedestrians in images. The experimental results indicate that the proposed method produces favorable results for images with extremely small objects and those with considerable differences in the aspect ratio. Compared with the original YOLOs (i.e., YOLOv2 and YOLOv3) and several state-of-the-art approaches, the proposed method demonstrated a superior performance for the VOC 2012 comp4, INRIA, and ETH databases in terms of the average precision, intersection over union, and lowest log-average miss rate.
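The ratio-aware step amounts to choosing YOLO input dimensions that follow the image's aspect ratio while staying multiples of the network stride; a rough sketch is given below, where the base size and cap are illustrative, not the values tuned in the paper.

```python
def ratio_aware_input_size(img_w, img_h, base=608, stride=32, max_side=1024):
    """Pick an input (width, height) whose aspect ratio tracks the image's.
    YOLO requires both sides to be multiples of the stride (32)."""
    ar = img_w / img_h
    if ar >= 1.0:
        w, h = min(base * ar, max_side), base
    else:
        w, h = base, min(base / ar, max_side)
    snap = lambda v: max(stride, int(round(v / stride)) * stride)
    return snap(w), snap(h)

print(ratio_aware_input_size(1920, 480))   # wide panorama  -> (1024, 608)
print(ratio_aware_input_size(480, 640))    # portrait frame -> (608, 800)
```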

10.
IEEE Trans Image Process ; 30: 1369-1381, 2021.
Article in English | MEDLINE | ID: mdl-33332268

ABSTRACT

Eye localization is undoubtedly crucial to acquiring large amounts of information. It not only helps people improve their understanding of others but is also a technology that enables machines to better understand humans. Although studies have reported satisfactory accuracy for frontal faces or head poses at limited angles, large head rotations generate numerous defects (e.g., disappearance of the eye), and existing methods are not effective enough to accurately localize eye centers. Therefore, this study makes three contributions to address these limitations. First, we propose a novel complete representation (CR) pipeline that can flexibly learn and generate two complete representations, namely the CR-center and CR-region, of the same identity. We also propose two novel eye center localization methods. The first method employs a geometric transformation to estimate the rotational difference between two faces and an unknown-localization strategy for accurate transformation of the CR-center. The second method is based on image translation learning and uses the CR-region to train a generative adversarial network, which can then accurately generate and localize eye centers. Five image databases are employed to verify the proposed methods, and tests reveal that compared with existing methods, the proposed methods can more accurately and robustly localize eye centers in challenging images, such as those showing considerable head rotation (yaw rotation of -67.5° to +67.5° and roll rotation of +120° to -120°), complete occlusion of both eyes, poor illumination combined with head rotation, head pose changes in the dark, and various gaze interactions.
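The geometric-transformation step of the first method can be illustrated with OpenCV's partial affine (similarity) estimation between matched facial landmarks; the coordinates below are invented for the example, and the real pipeline's CR-center construction is not shown.

```python
import numpy as np
import cv2

# Matched landmark coordinates in two views of the same face (made-up values).
src = np.float32([[120, 150], [200, 148], [160, 210], [130, 260], [195, 258]])
dst = np.float32([[110, 170], [188, 140], [162, 205], [150, 268], [210, 240]])

M, _ = cv2.estimateAffinePartial2D(src, dst)          # rotation + scale + translation
angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"estimated rotational difference: {angle:.1f} deg")

eye_src = np.array([142.0, 152.0, 1.0])               # eye centre known in the source view
eye_dst = M @ eye_src                                 # its predicted location in the target view
print("transformed eye centre:", eye_dst)
```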


Subjects
Eye/diagnostic imaging, Head Movements/physiology, Head, Image Processing, Computer-Assisted/methods, Algorithms, Head/diagnostic imaging, Head/physiology, Humans, Machine Learning, Rotation
11.
Healthcare (Basel) ; 9(1)2020 Dec 23.
Article in English | MEDLINE | ID: mdl-33374811

ABSTRACT

Optokinetic nystagmus (OKN) is an involuntary eye movement induced by motion of a large proportion of the visual field. It consists of a "slow phase (SP)" with eye movements in the same direction as the moving pattern and a "fast phase (FP)" with saccadic eye movements in the opposite direction. Study of OKN can reveal valuable information in ophthalmology, neurology, and psychology. However, commercially available high-resolution, research-grade eye trackers are usually expensive. Methods & Results: We developed a novel, fast, and effective system combined with a low-cost eye tracking device to accurately and quantitatively measure OKN eye movement. Conclusions: The experimental results indicate that the proposed method achieves fast and promising results in comparison with several traditional approaches.

12.
Medicine (Baltimore) ; 99(47): e23083, 2020 Nov 20.
Article in English | MEDLINE | ID: mdl-33217809

ABSTRACT

In the present study, we retrospectively analyzed the records of surgically confirmed kidney cancers with renal cell carcinoma pathology in the hospital database. We evaluated the significance of cancer size by assessing the outcomes of the proposed adaptive active contour model (ACM). The aim of our study was to develop an adaptive ACM method to measure the radiological size of kidney cancer on computed tomography in hospital patients. We propose a medical image processing pipeline: images provided by the hospital were first screened by physicians for clearly visible cases, preprocessed to remove noise, and the kidney cancer contour was then delineated using the proposed adaptive ACM method. The results showed that the automatically delineated contours are highly similar to the manual contours drawn by medical professionals, with an accuracy rate higher than 99%. We have developed a novel adaptive ACM approach that combines a knowledge-based system to contour kidney cancer in computed tomography imaging and support clinical decisions.
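A region-based active contour from scikit-image (morphological Chan-Vese) gives the general flavor of contouring a lesion and reading off its radiological size; the file name, iteration count, and pixel spacing are placeholders, and this is not the paper's adaptive ACM.

```python
import numpy as np
from skimage import io, filters
from skimage.segmentation import morphological_chan_vese

ct = io.imread("kidney_ct_slice.png", as_gray=True)    # hypothetical CT slice
ct = filters.gaussian(ct, sigma=1.0)                   # simple noise removal first

# Evolve a level set toward the lesion boundary (200 iterations, illustrative).
mask = morphological_chan_vese(ct, 200, init_level_set="checkerboard", smoothing=3)

pixel_spacing_mm = 0.7                                 # placeholder in-plane spacing
area_mm2 = mask.sum() * pixel_spacing_mm ** 2
diameter_mm = 2.0 * np.sqrt(area_mm2 / np.pi)          # equivalent-circle diameter
print(f"estimated lesion diameter: {diameter_mm:.1f} mm")
```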


Subjects
Kidney Neoplasms/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Adult, Aged, Contrast Media, Female, Humans, Kidney Neoplasms/surgery, Male, Middle Aged, Quality Improvement, Retrospective Studies, Sensitivity and Specificity
13.
Front Neurol ; 10: 910, 2019.
Article in English | MEDLINE | ID: mdl-31496988

ABSTRACT

Background: A predictive model can provide physicians, relatives, and patients with accurate information regarding the severity of disease and its predicted outcome. In this study, we used an automated machine-learning-based approach to construct a prognostic model to predict the functional outcome in patients with primary intracerebral hemorrhage (ICH). Methods: We retrospectively collected data on demographic characteristics, laboratory studies, and imaging findings of 333 patients with primary ICH. The functional outcomes at the 1st and 6th months after ICH were defined by the modified Rankin scale. All of the attributes were used for preprocessing and for automatic model selection with the Automatic Waikato Environment for Knowledge Analysis. The confusion matrix and areas under the receiver operating characteristic curve (AUC) were used to test predictive performance. Results: Among the models tested, the random forest provided the best predictive performance for functional outcome. The overall accuracy for predicting the 1st-month outcome was 83.1%, with 77.4% sensitivity and 86.9% specificity, and the AUC was 0.899. The overall accuracy for predicting the 6th-month outcome was 83.9%, with 72.5% sensitivity and 90.6% specificity, and the AUC was 0.917. Conclusions: Using an automatic machine learning technique to predict functional outcome after ICH is feasible, and the random forest model provided the best predictive performance across all tested models. This prediction model may provide clinicians with information regarding functional outcome that will help them provide appropriate medical care for patients and information for their caregivers.
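The modeling step boils down to fitting a random forest and reporting accuracy, sensitivity/specificity, and AUC; the scikit-learn sketch below uses a synthetic feature table of the same size (333 patients) in place of the clinical data, which are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(333, 20))                              # stand-in clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.8, 333) > 0).astype(int)  # stand-in outcome labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print(confusion_matrix(y_te, clf.predict(X_te)))            # rows: true class, cols: predicted
```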

14.
Biomed Mater Eng ; 26(3-4): 161-8, 2015.
Article in English | MEDLINE | ID: mdl-26684888

ABSTRACT

Single-trial electroencephalogram (EEG) data are analyzed with a similarity measure. A time-frequency representation is constructed from the EEG signals and then weighted with t-statistics. Finally, the test data are discriminated with the similarity measure. Compared with the non-weighted version, the experimental results indicate that the proposed method obtains better classification accuracy.


Subjects
Electroencephalography/methods, Brain/ultrastructure, Brain-Computer Interfaces, Humans, Models, Theoretical, Signal Processing, Computer-Assisted, User-Computer Interface
15.
Int J Neural Syst ; 25(8): 1550037, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26584583

ABSTRACT

In this study, an EEG classifier is proposed for the analysis of motor imagery (MI) EEG data from a brain-computer interface (BCI) competition. Applying subject-action-related brainwave data acquired from the sensorimotor cortices, the system primarily consists of artifact and background removal, feature extraction, feature selection, and classification. In addition to background noise, electrooculographic (EOG) artifacts are automatically removed to further improve the analysis of EEG signals. Several potential features, including amplitude modulation, spectral power and asymmetry ratio, adaptive autoregressive model parameters, and wavelet fuzzy approximate entropy (wfApEn), which measures and quantifies the complexity or irregularity of EEG signals, are then extracted for subsequent classification. Finally, the significant sub-features are selected from the feature combination by quantum-behaved particle swarm optimization and classified by a support vector machine (SVM). Compared with feature extraction without wfApEn on MI data from two data sets for nine subjects, the results indicate that the proposed system including wfApEn obtains better performance, with an average classification accuracy of 88.2% and an average of 12.1 commands per minute, which is promising for BCI applications.
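One of the listed features, approximate entropy, quantifies signal irregularity; a plain (non-fuzzy, non-wavelet) approximate entropy in NumPy is sketched below to show the idea, with the usual m = 2 and r = 0.2·SD defaults.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Pincus' ApEn: lower values mean a more regular (predictable) signal."""
    x = np.asarray(x, dtype=float)
    n, r = len(x), r_factor * np.std(x)

    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])        # embedding vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)   # Chebyshev distances
        c = np.mean(d <= r, axis=1)                                     # match fractions (incl. self)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

sine = np.sin(np.linspace(0, 20 * np.pi, 1000))
noise = np.random.randn(1000)
print(approximate_entropy(sine), approximate_entropy(noise))   # regular << irregular
```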


Subjects
Brain-Computer Interfaces, Electroencephalography/methods, Imagination/physiology, Motor Activity/physiology, Sensorimotor Cortex/physiology, Wavelet Analysis, Artifacts, Electrooculography, Entropy, Female, Fuzzy Logic, Humans, Male, Support Vector Machine
16.
Clin EEG Neurosci ; 46(2): 119-25, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25392006

ABSTRACT

In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model parameters, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with variants without artifact removal and with feature selection by a genetic algorithm, on single-trial EEG data from six subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications.
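As a rough sketch of the band-power-plus-SVM portion of such a pipeline (artifact removal and the ABC selector are omitted), mu and beta band powers can be computed with Welch's method and fed to an SVM; the data here are synthetic placeholders.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def band_power(trials, fs, band):
    """Mean PSD in `band` per channel; trials: (n_trials, n_channels, n_samples)."""
    f, psd = welch(trials, fs=fs, nperseg=fs, axis=-1)
    sel = (f >= band[0]) & (f <= band[1])
    return psd[..., sel].mean(axis=-1)

rng = np.random.default_rng(0)
fs = 128
trials = rng.normal(size=(80, 16, 4 * fs))                   # synthetic 16-channel trials
labels = rng.integers(0, 2, size=80)                         # synthetic class labels

feats = np.hstack([band_power(trials, fs, b) for b in [(8, 12), (13, 30)]])   # mu, beta
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```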


Subjects
Algorithms, Brain Mapping/methods, Brain-Computer Interfaces, Brain/physiology, Electroencephalography/methods, Animals, Bees, Biomimetics/methods, Evoked Potentials/physiology, Female, Humans, Male, Pattern Recognition, Automated/methods, Reproducibility of Results, Sensitivity and Specificity, Statistics as Topic
17.
Clin EEG Neurosci ; 46(2): 113-8, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25404753

ABSTRACT

An electroencephalogram recognition system considering phase features is proposed in this study to enhance the performance of motor imagery classification. It mainly consists of feature extraction, feature selection, and classification. A surface Laplacian filter is used for background removal. Several potential features, including phase features, are then extracted to enhance classification accuracy. Next, a genetic algorithm is used to select sub-features from the feature combination. Finally, the selected features are classified by an extreme learning machine. Compared with the version without phase features and with linear discriminant analysis on motor imagery data from two data sets, the results show that the proposed system achieves enhanced performance, making it suitable for brain-computer interface applications.
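Among the phase features mentioned, the phase-locking value between two channels is easy to state: take the analytic phase of each signal via the Hilbert transform and average the unit phasors of their phase difference. A minimal sketch with mock C3/C4 signals follows.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV in [0, 1]: 1 means a perfectly constant phase difference."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 256
t = np.arange(0, 2, 1 / fs)
c3 = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)        # mock C3 channel
c4 = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * np.random.randn(t.size)  # mock C4, phase-shifted
print("PLV(C3, C4):", round(phase_locking_value(c3, c4), 3))
```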


Subjects
Brain Mapping/methods, Brain-Computer Interfaces, Electroencephalography/methods, Imagination/physiology, Motor Cortex/physiology, Movement/physiology, Evoked Potentials, Motor/physiology, Female, Humans, Pattern Recognition, Automated/methods, Reproducibility of Results, Sensitivity and Specificity, Statistics as Topic
18.
Clin EEG Neurosci ; 46(2): 94-9, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24599891

ABSTRACT

A novel method for motor imagery (MI) electroencephalogram (EEG) data classification is proposed in this study. A time-frequency representation is constructed from the EEG signals by means of the continuous wavelet transform and then weighted with 2-sample t-statistics, which are also used to automatically select the area of interest in advance. Finally, normalized cross-correlation is used to discriminate the test MI data. Compared with the non-weighted version on MI data, the experimental results indicate that the proposed system achieves satisfactory results for brain-computer interface (BCI) applications.
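A bare-bones version of this classification scheme, with an illustrative Morlet wavelet and caller-supplied scales standing in for the paper's settings and without the automatic area-of-interest selection, could look as follows.

```python
import numpy as np
import pywt
from scipy.stats import ttest_ind

def tf_map(trial, scales, fs):
    coef, _ = pywt.cwt(trial, scales, 'morl', sampling_period=1.0 / fs)
    return np.abs(coef)                                  # (n_scales, n_samples) magnitude map

def ncc(a, b):                                           # normalized cross-correlation score
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.mean(a * b)

def train(class0_trials, class1_trials, scales, fs):
    m0 = np.stack([tf_map(t, scales, fs) for t in class0_trials])
    m1 = np.stack([tf_map(t, scales, fs) for t in class1_trials])
    w = np.abs(ttest_ind(m0, m1, axis=0).statistic)      # 2-sample t-statistic weights
    return w, [w * m0.mean(0), w * m1.mean(0)]           # weighted class templates

def classify(trial, w, templates, scales, fs):
    m = w * tf_map(trial, scales, fs)
    return int(np.argmax([ncc(m, tpl) for tpl in templates]))
```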


Subjects
Brain Mapping/methods, Brain-Computer Interfaces, Electroencephalography/methods, Imagination/physiology, Motor Cortex/physiology, Movement/physiology, Evoked Potentials, Motor/physiology, Humans, Pattern Recognition, Automated/methods, Reproducibility of Results, Sensitivity and Specificity, Statistics as Topic, Wavelet Analysis
19.
Clin EEG Neurosci ; 45(3): 163-8, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24048242

ABSTRACT

In this study, an electroencephalogram (EEG) analysis system combined with feature selection is proposed to enhance the classification of motor imagery (MI) data. It principally comprises feature extraction, feature selection, and classification. First, several features, including adaptive autoregressive (AAR) parameters, spectral power, asymmetry ratio, coherence, and phase-locking value, are extracted for subsequent classification. A genetic algorithm is then used to select features from the combination of the aforementioned features. Finally, the selected features are classified by a support vector machine (SVM). Compared with the version without feature selection and with a back-propagation neural network (BPNN) on MI data from two data sets, the proposed system achieves better classification accuracy and is suitable for brain-computer interface (BCI) applications.


Subjects
Algorithms, Brain Mapping/methods, Brain-Computer Interfaces, Cerebral Cortex/physiology, Electroencephalography/methods, Imagination/physiology, Neural Networks, Computer, Psychomotor Performance/physiology, Signal Processing, Computer-Assisted, Support Vector Machine, Functional Laterality/physiology, Humans, Neurofeedback, Wavelet Analysis
20.
Int J Neural Syst ; 23(6): 1350026, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24156669

ABSTRACT

In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection, and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with variants without artifact elimination, with feature selection by a genetic algorithm (GA), and with feature classification by Fisher's linear discriminant (FLD), on MI data from two data sets and eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.


Subjects
Algorithms, Artifacts, Brain-Computer Interfaces, Electroencephalography, Signal Processing, Computer-Assisted, Somatosensory Cortex/physiology, Humans, Imagination/physiology, Quantum Theory, Support Vector Machine